Whole-Word Recognition from Articulatory Movements for Silent Speech Interfaces

Authors

  • Jun Wang
  • Ashok Samal
  • Jordan R. Green
  • Frank Rudzicz
Abstract

Articulation-based silent speech interfaces convert silently produced speech movements into audible words. These systems are still in their experimental stages, but they have significant potential for facilitating oral communication in persons with laryngectomy or speech impairments. In this paper, we report the results of a novel, real-time algorithm that recognizes whole words based on articulatory movements. This approach differs from prior work, which has focused primarily on phoneme-level recognition based on articulatory features. On average, our algorithm missed 1.93 words in a sequence of twenty-five words, with an average latency of 0.79 seconds for each word prediction, using a data set of 5,500 isolated word samples collected from ten speakers. The results demonstrate the effectiveness of our approach and its potential for building a real-time articulation-based silent speech interface for health applications.
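The reported figure of 1.93 missed words per 25-word sequence corresponds to roughly a 7.7% word error rate. The abstract does not specify the recognition technique, but a common baseline for whole-word recognition from movement time series is nearest-template matching under dynamic time warping (DTW); the sketch below is an illustration of that general idea, not the authors' algorithm, and the word templates in it are toy data.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two movement time
    series a, b of shape (frames, sensor_dims)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean cost between frames, plus the cheapest warp path so far.
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(sample, templates):
    """Return the vocabulary word whose stored template is DTW-closest."""
    return min(templates, key=lambda w: dtw_distance(sample, templates[w]))

# Toy 2-D articulatory trajectories for a two-word vocabulary.
templates = {
    "yes": np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]),
    "no":  np.array([[2.0, 0.0], [1.0, -1.0], [0.0, -2.0]]),
}
noisy_sample = templates["yes"] + 0.1  # a slightly perturbed production
print(recognize(noisy_sample, templates))  # → yes
```

In practice a real system would use per-speaker normalized sensor trajectories and a much larger template set; DTW handles the variable word durations that make fixed-length comparison unworkable.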


Similar Articles

Word Recognition from Continuous Articulatory Movement Time-series Data using Symbolic Representations

Although still in the experimental stage, articulation-based silent speech interfaces may have significant potential for facilitating oral communication in persons with voice and speech problems. An articulation-based silent speech interface converts articulatory movement information to audible words. The complexity of the speech production mechanism (e.g., coarticulation) makes the conversion a formid...


Silent speech recognition from articulatory movements using deep neural network

Patients who have undergone laryngectomy lose their ability to produce speech sounds, which impairs their daily communication. There are currently limited communication options for these patients. Silent speech interfaces (SSIs), which recognize speech from articulatory information (i.e., without using audio information), have potential to assist the oral communication of persons with laryngectomy or other speech...


Modeling coarticulation in EMG-based continuous speech recognition

This paper discusses the use of surface electromyography for automatic speech recognition. Electromyographic signals captured at the facial muscles record the activity of the human articulatory apparatus and thus allow a speech signal to be traced back even if it is spoken silently. Since speech is captured before it becomes airborne, the resulting signal is not masked by ambient noise. The resulting ...


Across-speaker articulatory normalization for speaker-independent silent speech recognition

Silent speech interfaces (SSIs), which recognize speech from articulatory information (i.e., without using audio information), have the potential to enable persons with laryngectomy or a neurological disease to produce synthesized speech with a natural sounding voice using their tongue and lips. Current approaches to SSIs have largely relied on speaker-dependent recognition models to minimize t...


Multiview Representation Learning via Deep CCA for Silent Speech Recognition

Silent speech recognition (SSR) converts non-audio information, such as articulatory (tongue and lip) movements, to text. Articulatory movements generally carry less information than acoustic features for speech recognition, and therefore the performance of SSR may be limited. Multiview representation learning, which can learn better representations by analyzing multiple information sources simul...



Publication date: 2012